Releases: transformerlab/transformerlab-api
v0.17.0
What's Changed
- Test the install.sh with shellcheck by @aliasaria in #303
- Part 1: Added Exporter Logging Functionality by @aahaanmaini in #287
- Part 2: Added Exporter Logging Functionality by @aahaanmaini in #288
- Added ByteDance Seed Coder Model by @Sourenm in #307
- Fix samsum dataset path by @mina-parham in #308
- Adding support for AMD ROCm by @deep1401 in #259
- Small changes to not break AMD when building from source on a dev machine by @deep1401 in #313
New Contributors
- @mina-parham made their first contribution in #308
Full Changelog: v0.16.2...v0.17.0
v0.16.2
v0.16.1
What's Changed
- test workflow db by @aliasaria in #284
- Switch from nvidia-ml-py3 to nvidia-ml-py by @deep1401 in #285
- Fix broken pytest import by @deep1401 in #291
- Added the ability to pull model group gallery from galleries on startup by @Sourenm in #293
- Install.sh uses the reqs file from RUN_DIR if available, otherwise falls back to TLAB_CODE_DIR by @deep1401 in #290
- Fix single file names being fetched from .tlab_markitdown by @deep1401 in #292
- Update dataset gallery to match galleries one by @deep1401 in #295
- Remove Unused Requirements from all requirements files by @deep1401 in #298
- Update to local model-group-gallery.json by @Sourenm in #301
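The install.sh change in #290 describes a requirements-file fallback: prefer the file shipped in RUN_DIR, and only use the one in TLAB_CODE_DIR when it is missing. A minimal sketch of that pattern (the default paths below are placeholders for illustration, not the repo's actual layout):

```shell
# Sketch of the fallback in #290: prefer the requirements file in RUN_DIR,
# otherwise fall back to the copy in TLAB_CODE_DIR.
# The default directories are hypothetical placeholders.
RUN_DIR="${RUN_DIR:-/opt/tlab/run}"
TLAB_CODE_DIR="${TLAB_CODE_DIR:-/opt/tlab/code}"

if [ -f "$RUN_DIR/requirements.txt" ]; then
    REQS_FILE="$RUN_DIR/requirements.txt"
else
    REQS_FILE="$TLAB_CODE_DIR/requirements.txt"
fi
echo "Using requirements file: $REQS_FILE"
```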
Full Changelog: v0.16.0...v0.16.1
v0.16.0
What's Changed
- Upgrade wandb to avoid all protobuf 6.xx errors by @deep1401 in #271
- Adding MLX PPO Trainer by @deep1401 in #269
- add our first basic tests for the api by @aliasaria in #272
- Set manual context length for fallback on MLX Server by @deep1401 in #273
- Move API tests to test client instead of running a live server by @deep1401 in #275
- Rewrote exporter plugins to use plugin SDK by @aahaanmaini in #274
- Initial changes to support CUDA 12.8 (required for NVIDIA 50-series GPUs) by @aliasaria in #263
- Fix pytests installing dependencies by @deep1401 in #278
- Fix -h flag in run.sh caused by changing from HOST to TLABHOST by @deep1401 in #279
- Add back tests for install and server run by @deep1401 in #280
- Add option to load models in 4-bit for unsloth bnb models by @deep1401 in #282
- Fix failing imports on exporter plugins by @deep1401 in #276
- Fix updating names of tasks by @deep1401 in #283
New Contributors
- @aahaanmaini made their first contribution in #274
Full Changelog: v0.15.3...v0.16.0
v0.15.3
v0.15.2
What's Changed
- Fix run async mode on LLM Judge and Red Teaming param for attack by @deep1401 in #260
- Change package from fschat to transformerlab-inference by @deep1401 in #265
- Rename data gallery to prevent local init errors by @deep1401 in #262
- Pin DeepEval and add Langchain dependencies wherever necessary by @deep1401 in #264
- Plugin install API returns an error message on failure. by @dadmobile in #261
- Separate builds for MacOS and CPU-only devices by @deep1401 in #267
Full Changelog: v0.15.1...v0.15.2
v0.15.1
What's Changed
- Move dataset gallery to be remote by @deep1401 in #256
- Add/allow manual install of plugin setup by @aliasaria in #257
- Move inference server model logs to a custom directory by @deep1401 in #258
- Fix RAG Reindexing and install the plugin if not already installed by @deep1401 in #250
Full Changelog: v0.15.0...v0.15.1
v0.15.0
What's New
- Comprehensive Qwen 3 Support across:
- MLX Inference
- MLX Training
- MLX Export
- CUDA Inference powered by Huggingface Transformers
- SFT Training (single and multi GPU)
- LoRA Training
- GRPO Training (single and multi GPU)
- Unsloth GRPO Trainer
- DPO, ORPO, SIMPO Training
- and more
- Custom dtypes for FastChat and vLLM allow support for more graphics cards
- And more!
What's Changed
- Changes to FastChat and vLLM to work with custom dtypes by @deep1401 in #245
- MLX Server supports Gemma3 by @dadmobile in #243
- Add model delete from hf cache logic by @deep1401 in #249
- Upgrade MLX-LM to 22.1 by @aliasaria in #251
- Add Qwen3 to the model gallery by @aliasaria in #252
- Upgrade MLX and add support for Qwen3 in Trainer and Exporter by @deep1401 in #254
- Add Qwen3 Support to GGUF Exporter. by @aliasaria in #255
- Add Qwen3 support to LlamaTrainer plugin by @aliasaria in #253
- Llama CPP Upgrade by @deep1401 in #248
Full Changelog: v0.14.1...v0.15.0
v0.14.1
v0.14.0
What's Changed
- Change from Miniconda to Miniforge by @brunodoamaral in #203
- Upgrade transformers to 4.51.3 by @deep1401 in #229
- Fix sys.executable executions within plugins by @deep1401 in #227
- Fixed: GPU stats show NaN when the GPU cannot be accessed by @kolithawarnakulasooriya in #232
- Add support for models stored using XET storage on HF Hub by @deep1401 in #233
- Upgrade Pydantic calls from .dict() to .model_dump() by @deep1401 in #234
- refactor workflows runner by @sanjaycal in #236
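The Pydantic change in #234 refers to the v1-to-v2 API rename, where `.dict()` became `.model_dump()`. As a rough illustration of handling both versions during such a migration (the helper name is hypothetical, not from the repo):

```python
def model_dump_compat(model):
    """Serialize a Pydantic model to a dict across v1 and v2.

    Pydantic v2 renamed .dict() to .model_dump(); this shim prefers
    the v2 method and falls back to the v1 one when it is absent.
    """
    if hasattr(model, "model_dump"):
        return model.model_dump()  # Pydantic v2 path
    return model.dict()            # Pydantic v1 fallback
```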
New Contributors
- @brunodoamaral made their first contribution in #203
- @kolithawarnakulasooriya made their first contribution in #232
Full Changelog: v0.13.5...v0.14.0